Parameter-Efficient Fine-Tuning Method for Task-Oriented Dialogue Systems


Abstract

The use of Transformer-based pre-trained language models has become prevalent in enhancing the performance of task-oriented dialogue systems. These models, which are pre-trained on large text data to grasp syntax and semantics, are fine-tuned over their entire parameter set according to a specific task. However, as the scale of the model increases, several challenges arise during the fine-tuning process. For example, training time escalates as the model grows, since the complete parameter set needs to be trained. Furthermore, additional storage space is required to accommodate the larger model size. To address these challenges, we propose a new system called PEFTTOD. Our proposal leverages a Parameter-Efficient Fine-Tuning (PEFT) method, which incorporates an Adapter Layer and prefix tuning into the pre-trained model. It significantly reduces the overall parameter count used during training and transfers knowledge efficiently. We evaluated PEFTTOD on the Multi-WOZ 2.0 dataset, a benchmark dataset commonly used for task-oriented dialogue systems. Compared to the traditional fine-tuning method, PEFTTOD utilizes only about 4% of the parameters for training, resulting in an improvement in the combined score over the existing T5-based baseline. Moreover, PEFTTOD achieved an efficiency gain by reducing the training time by 20% and saving up to 95% of the storage space.
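The abstract does not give implementation details, but the two PEFT ingredients it names, an Adapter Layer and prefix tuning, can be illustrated with a minimal PyTorch sketch. The module names, dimensions, and placement below are illustrative assumptions, not the authors' PEFTTOD code.

```python
# Minimal sketch of the two PEFT ingredients named in the abstract:
# a bottleneck Adapter Layer and prefix tuning. Dimensions, placement,
# and names are illustrative assumptions, not the PEFTTOD source code.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter inserted after a frozen Transformer sub-layer."""

    def __init__(self, hidden_size: int = 512, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's representation intact.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


class PrefixTuning(nn.Module):
    """Trainable prefix vectors the frozen attention layers can attend to."""

    def __init__(self, num_layers: int = 6, prefix_len: int = 10, hidden_size: int = 512):
        super().__init__()
        # One learned prefix per layer; only these vectors are updated during training.
        self.prefixes = nn.Parameter(torch.randn(num_layers, prefix_len, hidden_size) * 0.02)

    def forward(self, layer_idx: int, batch_size: int) -> torch.Tensor:
        return self.prefixes[layer_idx].unsqueeze(0).expand(batch_size, -1, -1)


def trainable_parameter_ratio(backbone: nn.Module, peft_modules: nn.ModuleList) -> float:
    """Freeze the backbone and report the share of parameters that remain trainable."""
    for p in backbone.parameters():
        p.requires_grad = False
    peft = sum(p.numel() for p in peft_modules.parameters())
    total = sum(p.numel() for p in backbone.parameters()) + peft
    return peft / total
```

The design intent is that only the small adapter and prefix parameters are updated and stored per task while the pre-trained backbone stays frozen, which is what makes the reported parameter, training-time, and storage savings plausible.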


Similar Resources

Fine Grained Knowledge Transfer for Personalized Task-oriented Dialogue Systems

Training a personalized dialogue system requires a lot of data, and the data collected for a single user is usually insufficient. One common practice for this problem is to share training dialogues between different users and train multiple sequence-to-sequence dialogue models together with transfer learning. However, current sequence-to-sequence transfer learning models operate on the entire s...


Efficient and Robust Parameter Tuning for Heuristic Algorithms

The main advantage of heuristic or metaheuristic algorithms compared to exact optimization methods is their ability to handle large-scale instances within a reasonable time, albeit at the expense of losing the guarantee of achieving the optimal solution. Therefore, metaheuristic techniques are appropriate choices for solving NP-hard problems to near optimality. Since the parameters of heuristi...


Modeling Task-Oriented Dialogue

A common tool for improving the performance quality of natural language processing systems is the use of contextual information for disambiguation. Here I describe the use of a finite state machine (FSM) to disambiguate speech acts in a machine translation system. The FSM has two layers that model, respectively, the global and local structures found in naturally-occurring conversations. The FSM...
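The truncated abstract above describes the mechanism only in outline; a hedged sketch of how a two-layer FSM (a global layer for conversation phases, a local layer for adjacency pairs) could rank ambiguous speech-act hypotheses is shown below. All states and transitions are invented for illustration, not taken from the paper.

```python
# Illustrative two-layer FSM for speech-act disambiguation. The states and
# transitions are invented placeholders, not the paper's actual model of
# global and local conversational structure.

# Global layer: coarse phases of the whole conversation.
GLOBAL_TRANSITIONS = {
    ("opening", "greeting"): "negotiation",
    ("negotiation", "acceptance"): "closing",
}

# Local layer: adjacency pairs expected within the current phase.
LOCAL_TRANSITIONS = {
    ("expect_any", "question"): "expect_answer",
    ("expect_answer", "answer"): "expect_any",
}


def score_candidate(global_state: str, local_state: str, speech_act: str) -> int:
    """Prefer hypotheses licensed by the local layer, then by the global layer."""
    if (local_state, speech_act) in LOCAL_TRANSITIONS:
        return 2
    if (global_state, speech_act) in GLOBAL_TRANSITIONS:
        return 1
    return 0


def disambiguate(global_state: str, local_state: str, candidates: list) -> str:
    """Pick the candidate speech act the two layers jointly find most plausible."""
    return max(candidates, key=lambda act: score_candidate(global_state, local_state, act))


# Example: an ambiguous utterance could be read as a question or an acceptance.
print(disambiguate("negotiation", "expect_any", ["question", "acceptance"]))  # -> "question"
```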


BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems

We present a new algorithm that significantly improves the efficiency of exploration for deep Q-learning agents in dialogue systems. Our agents explore via Thompson sampling, drawing Monte Carlo samples from a Bayes-by-Backprop neural network. Our algorithm learns much faster than common exploration strategies such as ε-greedy, Boltzmann, bootstrapping, and intrinsic-reward-based ones. Additiona...
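The exploration scheme described above, drawing a Monte Carlo weight sample from a Bayes-by-Backprop posterior and acting greedily under it, can be roughly illustrated as follows; layer shapes, priors, and names are assumptions for the sketch, not the BBQ-Networks implementation.

```python
# Thompson-sampling action selection with a Bayes-by-Backprop style layer.
# Sizes, initial values, and names are illustrative assumptions only.
import torch
import torch.nn as nn


class BayesianLinear(nn.Module):
    """Linear layer with a mean-field Gaussian posterior over its weights."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -3.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reparameterised Monte Carlo sample of the weights (one per forward pass).
        w_sigma = torch.log1p(torch.exp(self.w_rho))
        b_sigma = torch.log1p(torch.exp(self.b_rho))
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        return x @ w.t() + b


def thompson_action(q_head: BayesianLinear, state: torch.Tensor) -> int:
    """Sample one Q-network from the posterior and act greedily under that sample."""
    with torch.no_grad():
        q_values = q_head(state)
    return int(q_values.argmax().item())


# Example: pick an action for a single dialogue state with 5 candidate actions.
head = BayesianLinear(in_features=16, out_features=5)
action = thompson_action(head, torch.randn(16))
```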


Task-oriented Dialogue Agent Architecture

Abstract: This paper focuses on our agent-based approach to dialogue management, equipped with a deliberation mechanism that is further extensible with additional features. The aim of this study has been to create a manager which can exhibit complex behaviour from two points of view – the development and runtime ones. To support the first case, the domain developer needs to provide a set of plans which tel...



Journal

Journal title: Mathematics

Year: 2023

ISSN: 2227-7390

DOI: https://doi.org/10.3390/math11143048